Although self-supervised and unsupervised methods have led to rapid progress in visual representation learning, these methods generally treat objects and scenes through the same lens. In this paper, we focus on learning representations for objects and scenes that preserve the structure among them. Motivated by the observation that visually similar objects are close in the representation space, we argue that scenes and objects should instead follow a hierarchical structure based on their compositionality. To exploit such a structure, we propose a contrastive learning framework in which a Euclidean loss is used to learn object representations and a hyperbolic loss is used to encourage representations of scenes to lie close to the representations of their constituent objects in hyperbolic space. This novel hyperbolic objective encourages scene-object hypernymy among the representations by optimizing the magnitude of their norms. We show that when pretraining on the COCO and OpenImages datasets, the hyperbolic loss improves the downstream performance of several baselines across multiple datasets and tasks, including image classification, object detection, and semantic segmentation. We also show that the properties of the learned representations allow us to solve various vision tasks involving the interaction between scenes and objects in a zero-shot fashion. Our code can be found at \url{https://github.com/shlokk/HCL/tree/main/HCL}.
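To make the objective concrete, here is a minimal PyTorch sketch of a hyperbolic contrastive loss of the kind described above, assuming the Poincaré-ball model and an InfoNCE-style formulation; the function names and details are illustrative assumptions, not the released HCL code:

```python
import torch

def poincare_distance(x, y, eps=1e-5):
    """Geodesic distance between points in the Poincare ball."""
    x2 = (x * x).sum(-1)
    y2 = (y * y).sum(-1)
    xy2 = ((x - y) ** 2).sum(-1)
    denom = (1 - x2).clamp_min(eps) * (1 - y2).clamp_min(eps)
    return torch.acosh(1 + 2 * xy2 / denom)

def exp_map0(v, eps=1e-5):
    """Map Euclidean features into the Poincare ball (exponential map at the origin)."""
    norm = v.norm(dim=-1, keepdim=True).clamp_min(eps)
    return torch.tanh(norm) * v / norm

def hyperbolic_contrastive_loss(scene_feats, object_feats, temperature=0.1):
    """Pull each scene toward a constituent object in hyperbolic space,
    contrasting against the other objects in the batch (InfoNCE-style)."""
    s = exp_map0(scene_feats)    # (B, D) scene embeddings
    o = exp_map0(object_feats)   # (B, D) matching object embeddings
    # Pairwise hyperbolic distances; smaller distance = higher similarity.
    d = poincare_distance(s.unsqueeze(1), o.unsqueeze(0))  # (B, B)
    logits = -d / temperature
    targets = torch.arange(s.size(0))
    return torch.nn.functional.cross_entropy(logits, targets)
```

Because geodesic distance in the Poincaré ball grows rapidly near the boundary, minimizing such a loss implicitly shapes the norms of scene and object embeddings, which is how the hypernymy structure gets encoded.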
Recent visuolinguistic pre-trained models show promising progress on various end tasks such as image retrieval and video captioning. Yet, they fail miserably on the recently proposed Winoground dataset, which challenges models to match paired images and English captions, with items constructed to overlap lexically but differ in meaning (e.g., "there is a mug in some grass" vs. "there is some grass in a mug"). By annotating the dataset using new fine-grained tags, we show that solving the Winoground task requires not just compositional language understanding, but a host of other abilities like commonsense reasoning or locating small, out-of-focus objects in low-resolution images. In this paper, we identify the dataset's main challenges through a suite of experiments on related tasks (probing task, image retrieval task), data augmentation, and manual inspection of the dataset. Our analysis suggests that a main challenge in visuolinguistic models may lie in fusing visual and textual representations, rather than in compositional language understanding. We release our annotation and code at https://github.com/ajd12342/why-winoground-hard .
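For reference, the Winoground task is scored with three paired metrics (as defined in the original dataset paper); a minimal sketch, where s[i][j] is the model's matching score for caption i and image j:

```python
def text_score(s):
    """Given each image, the model must prefer its own caption."""
    return s[0][0] > s[1][0] and s[1][1] > s[0][1]

def image_score(s):
    """Given each caption, the model must prefer its own image."""
    return s[0][0] > s[0][1] and s[1][1] > s[1][0]

def group_score(s):
    """An item counts only if both directions are solved at once."""
    return text_score(s) and image_score(s)
```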
When learning from sensitive data, care must be taken to ensure that the training algorithm addresses privacy. The canonical Private Aggregation of Teacher Ensembles, or PATE, computes output labels by aggregating the predictions of a (possibly distributed) collection of teacher models via a voting mechanism. The mechanism adds noise to attain a differential-privacy guarantee with respect to the teachers' training data. In this work, we observe that this use of noise, which makes PATE predictions stochastic, enables new forms of leakage of sensitive information. For a given input, our adversary exploits this stochasticity to extract a high-fidelity histogram of the votes submitted by the underlying teachers. From these histograms, the adversary can learn sensitive attributes of the input such as race, gender, or age. Although the attack does not directly violate the differential-privacy guarantee, it clearly violates privacy norms and expectations, and it would not be possible at all without the noise inserted to obtain differential privacy. Indeed, counter-intuitively, the attack becomes easier as we add more noise to provide stronger differential privacy. We hope this encourages future work to consider privacy holistically rather than treat differential privacy as a panacea.
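A toy sketch of the observation (not the authors' actual attack): with noisy-argmax PATE aggregation, an adversary who resubmits the same input many times can recover the teachers' vote histogram from the empirical label frequencies, and larger noise spreads those frequencies across near-tied classes:

```python
import numpy as np

def pate_label(votes, scale, rng):
    """Noisy-argmax PATE aggregation: add Laplace noise to per-class vote counts."""
    return int(np.argmax(votes + rng.laplace(0.0, scale, size=votes.shape)))

def attack(votes, scale, n_queries=10_000, seed=0):
    """Adversary resubmits one input many times and tallies the returned labels.
    The empirical label frequencies reveal the (secret) teacher vote histogram."""
    rng = np.random.default_rng(seed)
    counts = np.zeros_like(votes, dtype=float)
    for _ in range(n_queries):
        counts[pate_label(votes, scale, rng)] += 1
    return counts / n_queries  # frequencies depend on the vote gaps between classes

# Example: 100 teachers, 3 classes. With more noise (larger scale), the output
# distribution spreads out, leaking *more* about the near-tied first two classes.
print(attack(np.array([48, 45, 7]), scale=20.0))
```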
Recent advances in 3D semantic segmentation with deep neural networks have shown remarkable success, with rapid performance gains on the available datasets. However, current 3D semantic segmentation benchmarks contain only a small number of categories -- fewer than 30 for ScanNet and SemanticKITTI, for instance -- which is not enough to reflect the diversity of real environments (e.g., semantic image understanding covers hundreds to thousands of classes). Thus, we propose to study a larger vocabulary for 3D semantic segmentation with a new extended benchmark on ScanNet data with 200 class categories, an order of magnitude more than previously studied. This large number of categories also induces a large natural class imbalance, and both are challenging for existing 3D semantic segmentation methods. To learn more robust 3D features in this context, we propose a language-grounded pre-training method that encourages learned 3D features, which may have limited training examples, to lie close to their pre-trained text embeddings. Extensive experiments show that our approach consistently outperforms state-of-the-art 3D pre-training for 3D semantic segmentation on our proposed benchmark (+9% relative mIoU), including limited-data scenarios with +25% relative mIoU using only 5% of the annotations.
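A minimal sketch of the kind of language-grounded objective described, assuming frozen text embeddings of the class names (e.g., from a CLIP text encoder) and a cross-entropy over cosine similarities; the names and details are illustrative, not the paper's exact loss:

```python
import torch
import torch.nn.functional as F

def language_grounded_loss(point_feats, labels, text_emb, temperature=0.07):
    """Pull each 3D point feature toward the frozen text embedding of its
    class name, contrasting against the text embeddings of all other classes.

    point_feats: (N, D) per-point features from the 3D backbone
    labels:      (N,)   ground-truth class indices
    text_emb:    (C, D) frozen text embeddings, one per class name
    """
    p = F.normalize(point_feats, dim=-1)
    t = F.normalize(text_emb, dim=-1)
    logits = p @ t.t() / temperature  # (N, C) cosine similarities
    return F.cross_entropy(logits, labels)
```

Anchoring rare classes to text embeddings is what gives categories with few training examples a meaningful target, which is why such a loss helps under heavy class imbalance.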
The CO2 capture efficiency of solvent-based carbon capture systems (CCSs) depends critically on the gas-solvent interfacial area (IA), making IA maximization a foundational challenge in CCS design. While the IA associated with a particular CCS design can be estimated via computational fluid dynamics (CFD) simulation, using CFD to derive the IAs associated with many CCS designs is prohibitively costly. Fortunately, previous work such as Deep Fluids (DF) (Kim et al., 2019) shows that large simulation speedups are achievable by replacing CFD simulators with neural-network (NN) surrogates that faithfully mimic the CFD simulation process. This raises the possibility of a fast, accurate replacement for the CFD simulator, and hence an efficient way to approximate the IAs needed for CCS design optimization. Here, we build on the DF approach to develop surrogates that apply successfully to our complex carbon-capture CFD simulations. Our optimized DF-style surrogates produce large speedups (4000x) while achieving IA relative errors as low as 4% on unseen CCS configurations that lie within the range of the training configurations. This hints at the promise of NN surrogates for the CCS design-optimization problem. Nonetheless, DF has inherent limitations with respect to CCS designs (e.g., limited transferability of a trained model to new CCS packings). We conclude with ideas for addressing these challenges.
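As a deliberately simplified illustration of the surrogate pattern (a real DF-style surrogate emulates full CFD flow fields, not just a scalar), one can regress IA directly from CCS design parameters; everything here is an assumption for illustration:

```python
import torch
import torch.nn as nn

class IASurrogate(nn.Module):
    """Toy stand-in for a CFD surrogate: design parameters -> predicted IA."""
    def __init__(self, n_params, hidden=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_params, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, design_params):  # (B, n_params) -> (B, 1) predicted IA
        return self.net(design_params)

def train_step(model, opt, params, ia_cfd):
    """One supervised step against CFD-computed IA labels."""
    opt.zero_grad()
    loss = nn.functional.mse_loss(model(params), ia_cfd)
    loss.backward()
    opt.step()
    return loss.item()
```

Once trained, a forward pass costs microseconds versus hours of CFD, which is the source of the large speedups reported; accuracy, as the abstract notes, holds only within the range of the training configurations.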
Federated learning (FL) is a machine learning setting where many clients (e.g. mobile devices or whole organizations) collaboratively train a model under the orchestration of a central server (e.g. service provider), while keeping the training data decentralized. FL embodies the principles of focused data collection and minimization, and can mitigate many of the systemic privacy risks and costs resulting from traditional, centralized machine learning and data science approaches. Motivated by the explosive growth in FL research, this paper discusses recent advances and presents an extensive collection of open problems and challenges.
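For readers new to the setting, below is a minimal sketch of one round of the canonical Federated Averaging (FedAvg) algorithm (McMahan et al., 2017); real deployments weight the average by client dataset size and add secure aggregation, compression, and privacy mechanisms on top:

```python
import copy
import torch

def fedavg_round(global_model, clients, local_steps=5, lr=0.01):
    """One FedAvg round: each client trains a copy of the global model on its
    own data; only model weights (never raw data) return to the server.
    (FedAvg proper weights each client by its example count; unweighted here
    for brevity.)"""
    client_states = []
    for loader in clients:  # each client's private DataLoader
        local = copy.deepcopy(global_model)
        opt = torch.optim.SGD(local.parameters(), lr=lr)
        for _, (x, y) in zip(range(local_steps), loader):
            opt.zero_grad()
            torch.nn.functional.cross_entropy(local(x), y).backward()
            opt.step()
        client_states.append(local.state_dict())
    # Server aggregation: element-wise average of the client weights.
    avg = {k: torch.stack([s[k].float() for s in client_states]).mean(0)
           for k in client_states[0]}
    global_model.load_state_dict(avg)
    return global_model
```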
Large, labeled datasets have driven deep learning methods to achieve expert-level performance on a variety of medical imaging tasks. We present CheXpert, a large dataset that contains 224,316 chest radiographs of 65,240 patients. We design a labeler to automatically detect the presence of 14 observations in radiology reports, capturing uncertainties inherent in radiograph interpretation. We investigate different approaches to using the uncertainty labels for training convolutional neural networks that output the probability of these observations given the available frontal and lateral radiographs. On a validation set of 200 chest radiographic studies which were manually annotated by 3 board-certified radiologists, we find that different uncertainty approaches are useful for different pathologies. We then evaluate our best model on a test set composed of 500 chest radiographic studies annotated by a consensus of 5 board-certified radiologists, and compare the performance of our model to that of 3 additional radiologists in the detection of 5 selected pathologies. On Cardiomegaly, Edema, and Pleural Effusion, the model ROC and PR curves lie above all 3 radiologist operating points. We release the dataset to the public as a standard benchmark to evaluate performance of chest radiograph interpretation models.
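The paper's uncertainty-handling approaches can be summarized with a small sketch; the label conventions follow the CheXpert labeler (1 = positive, 0 = negative, -1 = uncertain), while the function itself is illustrative:

```python
import numpy as np

def apply_uncertainty_policy(labels, policy):
    """Map the labeler's uncertain (-1) entries according to one of the
    policies compared in the paper."""
    labels = labels.astype(float).copy()
    uncertain = labels == -1
    if policy == "U-Zeros":      # treat uncertain as negative
        labels[uncertain] = 0.0
    elif policy == "U-Ones":     # treat uncertain as positive
        labels[uncertain] = 1.0
    elif policy == "U-Ignore":   # mask uncertain examples out of the loss
        labels[uncertain] = np.nan  # downstream loss should skip NaNs
    return labels
```

The finding that different policies win for different pathologies suggests the uncertain label carries pathology-specific information rather than pure noise.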
This paper proposes a novel kernel-based optimization scheme for analysis tasks such as signal spectral estimation and single-channel source separation of 1D non-stationary oscillatory data. The key insight of our optimization scheme for reconstructing the time-frequency information is that when nonparametric regression is applied to some input values, the regressed output points lie near the oscillatory pattern of the 1D signal only if those input values are a good approximation of the ground-truth phase function. In this work, a Gaussian Process (GP) conducts this nonparametric regression: the oscillatory pattern is encoded as Pattern-inducing Points (PiPs), which act as the training data points in the GP regression, while the candidate phase function is fed in as the testing input to compute the correlation kernels. A better-approximated phase function generates more precise kernels, and thus a smaller optimization loss when the kernel-based regression output is compared with the original signal. To the best of our knowledge, this is the first algorithm that satisfactorily handles fully non-stationary oscillatory data, close and crossover frequencies, and general oscillatory patterns. Even on a signal produced by slow variation in the parameters of a trigonometric expansion, we show that PiPs achieves accuracy and robustness competitive with or better than existing state-of-the-art algorithms.
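A toy sketch of this loss under strong simplifying assumptions (a single component, a known unit-period pattern, and the GP posterior mean only; not the authors' full scheme):

```python
import numpy as np

def rbf(a, b, ell=0.1):
    """Squared-exponential correlation kernel between two 1D point sets."""
    return np.exp(-(a[:, None] - b[None, :]) ** 2 / (2 * ell ** 2))

def pips_loss(phase, signal, pip_x, pip_y, ell=0.1, jitter=1e-4):
    """GP-regress the pattern (encoded by the pattern-inducing points
    pip_x, pip_y) at the candidate phase values and compare the regression
    output with the observed signal. The loss is small only when the
    candidate phase is close to the ground-truth phase function."""
    K = rbf(pip_x, pip_x, ell) + jitter * np.eye(len(pip_x))
    k_star = rbf(phase % 1.0, pip_x, ell)      # test inputs: phase mod one period
    pred = k_star @ np.linalg.solve(K, pip_y)  # GP posterior mean at the phases
    return np.mean((pred - signal) ** 2)

# Toy usage: the pattern is one period of a cosine, sampled as PiPs.
pip_x = np.linspace(0, 1, 32, endpoint=False)
pip_y = np.cos(2 * np.pi * pip_x)
t = np.linspace(0, 1, 256)
true_phase = 3 * t + 0.5 * t ** 2                  # mildly chirped phase
signal = np.cos(2 * np.pi * true_phase)
print(pips_loss(true_phase, signal, pip_x, pip_y))  # small
print(pips_loss(2.5 * t, signal, pip_x, pip_y))     # larger for a wrong phase
```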
In this paper, we propose a novel technique, INVALIDATOR, to automatically assess the correctness of APR-generated patches via semantic and syntactic reasoning. INVALIDATOR reasons about program semantics via program invariants, and it also captures program syntax via language semantics learned from a large code corpus using a pre-trained language model. Given a buggy program and the developer-patched program, INVALIDATOR infers likely invariants on both programs. INVALIDATOR then determines that an APR-generated patch overfits if it (1) violates correct specifications or (2) maintains erroneous behaviors of the original buggy program. When our approach cannot determine overfitting from invariants, INVALIDATOR uses a model trained on labeled patches to assess patch correctness based on program syntax. The benefit of INVALIDATOR is three-fold. First, INVALIDATOR leverages both semantic and syntactic reasoning to enhance its discriminative capability. Second, INVALIDATOR does not require generating new test cases; it relies only on the current test suite and uses invariant inference to generalize program behaviors. Third, INVALIDATOR is fully automated. We conducted experiments on a dataset of 885 patches generated for real-world programs in Defects4J. The results show that INVALIDATOR correctly classifies 79% of overfitting patches, detecting 23% more overfitting patches than the best baseline. INVALIDATOR also substantially outperforms the best baselines by 14% and 19% in terms of accuracy and F-measure, respectively.
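A sketch of the two-stage decision logic described above, with invariant sets reduced to plain sets of predicate strings; the function signature and set-based checks are our own illustration, not INVALIDATOR's actual API:

```python
def is_overfitting(inv_correct, inv_error, inv_patched,
                   syntax_overfit_prob=None, threshold=0.5):
    """Two-stage overfitting check sketched from the abstract.

    inv_correct: invariants characterizing correct behavior (passing runs)
    inv_error:   invariants characterizing erroneous behavior (failing runs)
    inv_patched: likely invariants inferred on the patched program
    """
    # (1) The patch violates the correct specification: some invariant of the
    #     correct behavior no longer holds on the patched program.
    if not inv_correct <= inv_patched:
        return True
    # (2) The patch maintains erroneous behaviors of the buggy program.
    if inv_error & inv_patched:
        return True
    # Fallback: a syntax-based classifier trained on labeled patches.
    if syntax_overfit_prob is not None:
        return syntax_overfit_prob > threshold
    return False

# Toy usage with predicate strings: the patch keeps an error invariant.
print(is_overfitting({"x >= 0"}, {"y == null"}, {"x >= 0", "y == null"}))  # True
```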
The recent increase in public and academic interest in preserving biodiversity has led to the growth of the field of conservation technology. This field involves designing and constructing tools that utilize technology to aid in the conservation of wildlife. In this article, we will use case studies to demonstrate the importance of designing conservation tools with human-wildlife interaction in mind and provide a framework for creating successful tools. These case studies include a range of complexities, from simple cat collars to machine learning and game theory methodologies. Our goal is to introduce and inform current and future researchers in the field of conservation technology and provide references for educating the next generation of conservation technologists. Conservation technology not only has the potential to benefit biodiversity but also has broader impacts on fields such as sustainability and environmental protection. By using innovative technologies to address conservation challenges, we can find more effective and efficient solutions to protect and preserve our planet's resources.